The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices of, and the bottlenecks faced by, the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, and the characteristics of their algorithms. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
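As an illustration of the k-fold cross-validation scheme the survey asked about, here is a minimal index-splitting sketch; the function name and fold layout are our own, not taken from the survey:

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices into k roughly equal folds; each fold serves
    once as the validation set while the remaining folds form the
    training set."""
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return [
        ([i for j, f in enumerate(folds) if j != v for i in f], folds[v])
        for v in range(k)
    ]

# 10 samples, 5 folds: every index appears in exactly one validation fold.
splits = k_fold_indices(10, k=5)
```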
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
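The INT8 models mentioned above rely on affine quantization; a minimal sketch of that mapping (simplified per-tensor quantization, not any participant's actual pipeline) is:

```python
def quantize_int8(weights):
    """Asymmetric (affine) INT8 quantization: map float weights onto the
    integer range [-128, 127] via a real-valued scale and an integer zero
    point, the representation that integer-only NPUs execute."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0          # guard against constant tensors
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate float values from the quantized tensor."""
    return [(v - zero_point) * scale for v in q]

w = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(w)
recon = dequantize_int8(q, scale, zp)
```

In a real TensorFlow Lite workflow the converter derives these parameters from a representative dataset; the sketch only shows the arithmetic involved.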
The role of mobile cameras has increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline that replaces the standard mobile ISPs and can run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with a large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs, being able to process Full HD photos in under 20-50 milliseconds while achieving high fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
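For context on what a learned end-to-end pipeline replaces, here is a toy sketch of two classic hand-crafted ISP stages; the gain and gamma values are arbitrary placeholders, not taken from the challenge:

```python
def toy_isp(rgb, wb_gains=(2.0, 1.0, 1.8), gamma=2.2):
    """Two classic hand-crafted ISP stages: per-channel white-balance
    gains followed by gamma correction. Inputs and outputs are
    normalized to [0, 1]; the parameter values are illustrative only."""
    balanced = [min(1.0, c * g) for c, g in zip(rgb, wb_gains)]
    return [c ** (1.0 / gamma) for c in balanced]

out = toy_isp([0.5, 1.0, 0.25])
```

A production ISP chains many more stages (demosaicing, denoising, tone mapping); a learned ISP replaces the whole chain with a single network.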
The Tor (The Onion Router) network is a widely used open-source anonymous communication tool. The abuse of Tor makes it difficult to monitor the proliferation of online crimes, such as visits to criminal websites. Most existing approaches to de-anonymizing the Tor network rely heavily on manually extracted features, resulting in high time consumption and poor performance. To address these shortcomings, this paper proposes a neural representation approach to recognize website fingerprints based on a classification algorithm. We build a new website fingerprinting attack model based on convolutional neural networks (CNNs) with dilated and causal convolutions, which enlarge the receptive field of the CNN and capture the sequential features of the input data. Experiments on three mainstream public datasets show that the proposed model is highly effective and efficient for website fingerprinting classification, improving accuracy by 12.21% over state-of-the-art methods.
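The dilated and causal convolutions mentioned above can be sketched in a few lines; this is a generic illustration of the operation, not the paper's model:

```python
def dilated_causal_conv1d(x, kernel, dilation=1):
    """1-D causal convolution with dilation: the output at step t depends
    only on inputs at t, t-d, t-2d, ..., so the order of a traffic trace
    is respected while the receptive field grows with the dilation rate."""
    k = len(kernel)
    pad = (k - 1) * dilation              # left-pad to keep output length
    padded = [0.0] * pad + list(x)
    return [
        sum(kernel[i] * padded[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ]
```

Stacking such layers with dilations 1, 2, 4, ... gives a receptive field that grows exponentially with depth, which is why they suit long packet sequences.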
The growing importance of ride-hailing services in recent years suggests a need to study the key determinants of ride-hailing demand. However, little is known about the nonlinear effects and spatial heterogeneity of these determinants. This study adopts an interpretable machine-learning-based analytical framework to identify the key factors shaping ride-hailing demand and to explore their nonlinear associations across different spatial contexts (airport, downtown, and neighborhood). We use ride-hailing trip data from Chicago for the empirical analysis. The results show that the importance of the built environment varies across spatial contexts, and built-environment factors collectively contribute the greatest importance in predicting ride-hailing demand for airport trips. Moreover, the nonlinear effects of the built environment on ride-hailing demand exhibit strong spatial variation. Ride-hailing demand is generally most responsive to changes in the built environment for downtown trips, followed by neighborhood trips and airport trips. These findings offer transportation professionals nuanced insights for managing ride-hailing services.
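Nonlinear associations of the kind described above are typically inspected with partial dependence curves; a minimal sketch (with a hypothetical toy model, not the study's actual one) is:

```python
def partial_dependence(model, rows, feature_idx, grid):
    """One-dimensional partial dependence: sweep a single feature across
    a grid while averaging the model's prediction over all observed rows,
    exposing that feature's (possibly nonlinear) marginal effect."""
    curve = []
    for value in grid:
        preds = []
        for row in rows:
            modified = list(row)
            modified[feature_idx] = value
            preds.append(model(modified))
        curve.append(sum(preds) / len(preds))
    return curve

# Hypothetical model with a quadratic (nonlinear) effect of feature 0.
toy_model = lambda r: r[0] ** 2 + r[1]
curve = partial_dependence(toy_model, [[0, 1], [0, 2]], 0, [0, 1, 2])
```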
We introduce an explorative active learning (AL) algorithm based on Gaussian process regression with the marginalized graph kernel (GPR-MGK) to explore chemical space at minimum cost. Using high-throughput molecular dynamics simulations to generate data and graph neural networks (GNNs) for prediction, we construct an active learning molecular simulation framework for thermodynamic property prediction. For a specific target set of 251,728 alkane molecules, consisting of 4 to 19 carbon atoms, and their liquid physical properties (density, heat capacity, and enthalpy of vaporization), we use the AL algorithm to select the most informative molecules to represent the chemical space. Validation on computational and experimental test sets shows that only 313 molecules (0.124% of the total) are sufficient to train accurate GNN models that achieve $\mathrm{R}^2 > 0.99$ on the computational test set and $\mathrm{R}^2 > 0.94$ on the experimental test set. We highlight two advantages of the proposed AL algorithm: compatibility with high-throughput data generation and reliable uncertainty quantification.
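The explorative selection step can be sketched as a greedy uncertainty loop; here `variance` and `oracle` are stubs standing in for the GPR-MGK predictive variance and the MD simulation, and in practice the model would be retrained after each acquisition so the variances shrink:

```python
def run_active_learning(pool, variance, oracle, threshold):
    """Explorative AL loop: repeatedly label the candidate with the
    highest predictive uncertainty until every remaining candidate falls
    below the threshold. `variance` and `oracle` are placeholders."""
    labeled = {}
    while True:
        candidates = [x for x in pool if x not in labeled]
        if not candidates:
            break
        best = max(candidates, key=variance)
        if variance(best) < threshold:
            break
        labeled[best] = oracle(best)   # stand-in for running a simulation
    return labeled

# Hypothetical molecules with fixed uncertainty scores.
uncertainty = {"mol_a": 0.9, "mol_b": 0.5, "mol_c": 0.05}
picked = run_active_learning(list(uncertainty), uncertainty.get, str.upper, 0.2)
```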
Sensing the three-dimensional (3D) structure of spacecraft is a prerequisite for successfully executing many on-orbit space missions and can provide critical input for many downstream vision algorithms. In this paper, we propose to sense the 3D structure of spacecraft using a light detection and ranging (LiDAR) sensor and a monocular camera. To this end, a spacecraft depth completion network (SDCNet) is proposed to recover dense depth maps from gray images and sparse depth maps. Specifically, SDCNet decomposes the object-level spacecraft depth completion task into a foreground segmentation subtask and a foreground depth completion subtask: it first segments the spacecraft region and then performs depth completion on the segmented foreground. In this way, background interference with foreground depth completion is effectively avoided. In addition, an attention-based feature fusion module is proposed to aggregate the complementary information across different inputs, which sequentially infers the correlations between different features along the channel and spatial dimensions. Furthermore, four metrics are proposed to evaluate object-level depth completion performance, which reflect the quality of spacecraft depth completion results more intuitively. Finally, a large-scale satellite depth completion dataset is constructed for training and testing spacecraft depth completion algorithms. Empirical experiments on the dataset demonstrate the effectiveness of the proposed SDCNet, which achieves a mean absolute error of 0.25 m and a mean absolute truncation error of 0.759 m, surpassing previous methods by a large margin. Spacecraft pose estimation experiments were also conducted on the depth completion results, and they show that the predicted dense depth maps can meet the needs of downstream vision tasks.
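An object-level metric in the spirit of those described (restricting the error to the segmented foreground) might look like the following sketch; the paper defines its own four metrics, so this function is only an assumed illustration:

```python
def foreground_mae(pred, gt, mask):
    """Object-level mean absolute error: average |pred - gt| only over
    pixels inside the foreground mask, so that background pixels cannot
    dilute the spacecraft depth error. Inputs are flattened pixel lists."""
    errors = [abs(p - g) for p, g, m in zip(pred, gt, mask) if m]
    return sum(errors) / len(errors) if errors else 0.0

# Last pixel is background (mask 0) and is excluded from the error.
mae = foreground_mae([1.0, 2.0, 3.0, 9.0], [1.0, 3.0, 3.0, 0.0], [1, 1, 1, 0])
```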
With the rise of deep convolutional neural networks, object detection has achieved remarkable progress over the past few years. However, this prosperity cannot mask the unsatisfactory state of small object detection (SOD), one of the notoriously challenging tasks in computer vision, owing to the poor visual appearance and noisy representations caused by the intrinsic structure of small targets. In addition, large-scale datasets for benchmarking small object detection methods remain a bottleneck. In this paper, we first conduct a thorough review of small object detection. Then, to catalyze the development of SOD, we construct two large-scale Small Object Detection dAtasets (SODA), SODA-D and SODA-A, which focus on driving and aerial scenarios, respectively. SODA-D includes 24,704 high-quality traffic images and 277,596 instances of 9 categories. For SODA-A, we collect 2,510 high-resolution aerial images and annotate 800,203 instances over 9 categories. To the best of our knowledge, the proposed datasets are the first-ever attempt at large-scale benchmarks with a massive collection of annotated instances tailored for multi-class SOD. Finally, we evaluate the performance of mainstream methods on SODA. We expect the released benchmarks to facilitate the development of SOD and spawn more breakthroughs in this field. The datasets and code will be available soon at https://shaunyuan22.github.io/soda.
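Benchmarking detectors such as those evaluated on SODA rests on box IoU matching; a minimal reference implementation (generic, not SODA's evaluation code) is:

```python
def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2); the standard matching criterion when scoring
    detections against annotated instances. IoU is especially punishing
    for small boxes, where a few pixels of offset erase the overlap."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```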
Video instance segmentation (VIS) aims to classify, segment, and track object instances in video sequences. Recent transformer-based neural networks have demonstrated a powerful capability to model spatio-temporal correlations for the VIS task. Relying on video- or clip-level inputs, however, they suffer from high latency and computational cost. We propose a robust context fusion network that tackles VIS in an online fashion, predicting instance segmentation frame by frame with a few preceding frames. To efficiently acquire precise and temporally consistent predictions for each frame, the key idea is to fuse effective and compact context from the reference frames into the target frame. Considering the different effects of reference and target frames on the target prediction, we first summarize contextual features through importance-aware compression. A transformer encoder is adopted to fuse the compressed context. We then leverage order-preserving instance embeddings to convey identity-aware information and correspond identities to predicted instance masks. We demonstrate that our robust fusion network achieves the best performance among existing online VIS methods and is better than previously published clip-level methods on the YouTube-VIS 2019 and 2021 benchmarks. Furthermore, visual objects often have acoustic signatures that are naturally synchronized with them in audio-bearing video recordings. By leveraging the flexibility of our context fusion network on multi-modal data, we further investigate the influence of audio on video dense prediction tasks, which has never been discussed in existing works. We build an audio-visual instance segmentation dataset and demonstrate that acoustic signals in in-the-wild scenarios can benefit the VIS task.
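The importance-aware compression step can be sketched as a softmax-weighted pooling of reference-frame features; the real module is learned end-to-end, so the fixed scores and tiny vectors here are purely illustrative:

```python
import math

def compress_context(features, importance):
    """Importance-aware compression, sketched: softmax the per-frame
    importance scores and take the weighted sum of reference-frame
    feature vectors, producing one compact context vector to hand to
    the transformer fusion encoder."""
    exps = [math.exp(s) for s in importance]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    return [sum(w * f[d] for w, f in zip(weights, features)) for d in range(dim)]

# Two reference frames with equal importance contribute equally.
ctx = compress_context([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
```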
Referring video object segmentation (R-VOS) aims to segment object masks in a video given a linguistic expression referring to an object. It is a recently introduced task that has attracted growing research attention. However, all existing works make a strong assumption: the object depicted by the expression must exist in the video, i.e., the expression and the video must share object-level semantic consensus. This assumption is often violated in the real world, where an expression may be queried against unrelated (false) videos, and existing methods consistently fail on such false queries because of the abused assumption. In this work, we emphasize that studying semantic consensus is necessary to improve the robustness of R-VOS. Accordingly, we pose an extended task from R-VOS without the semantic consensus assumption, named Robust R-VOS ($\mathrm{R}^2$-VOS). The $\mathrm{R}^2$-VOS task is essentially related to the joint modeling of the primary R-VOS task and its dual problem (text reconstruction). We embrace the observation that the embedding spaces possess relational consistency through the cycle of text-video-text transformation, which connects the primary and dual problems. We leverage this cycle consistency to discriminate semantic consensus and thereby advance the primary task. Parallel optimization of the primary and dual problems is enabled by introducing an early grounding medium. A new evaluation dataset, $\mathrm{R}^2$-YouTube-VOS, is collected to measure the robustness of R-VOS models against unpaired videos and expressions. Extensive experiments demonstrate that our method not only identifies negative pairs of unrelated expressions and videos but also improves the segmentation accuracy of positive pairs with superior disambiguation ability. Our model achieves state-of-the-art performance on the Ref-DAVIS17, Ref-YouTube-VOS, and novel $\mathrm{R}^2$-YouTube-VOS datasets.
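A toy version of using cycle consistency to discriminate semantic consensus might look as follows; the embeddings and threshold are hypothetical placeholders, not the paper's learned components:

```python
def semantic_consensus(expr_emb, recon_emb, threshold=0.5):
    """Toy cycle-consistency test: if the embedding of the reconstructed
    text stays close (cosine similarity) to the original expression
    embedding after the text-video-text cycle, accept the
    expression-video pair as a positive; otherwise flag it as an
    unpaired (negative) query. The threshold is arbitrary."""
    dot = sum(a * b for a, b in zip(expr_emb, recon_emb))
    na = sum(a * a for a in expr_emb) ** 0.5
    nb = sum(b * b for b in recon_emb) ** 0.5
    sim = dot / (na * nb) if na and nb else 0.0
    return sim >= threshold, sim
```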